On the divergence of line search methods
Author
Abstract
We discuss the convergence of line search methods for minimization. We explain how Newton's method and the BFGS method can fail even if the restrictions of the objective function to the search lines are strictly convex functions, the level sets of the objective function are compact, the line searches are exact, and the Wolfe conditions are satisfied. This explanation illustrates a new way to combine general mathematical concepts and symbolic computation to analyze the convergence of line search methods. It also illustrates the limitations of the asymptotic analysis of the iterates of nonlinear programming algorithms.
Mathematics Subject Classification (2000): 20E28 · 20G40 · 20C20
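For reference, the Wolfe conditions mentioned above require the step length \alpha_k taken along a descent direction d_k from the iterate x_k to satisfy, for constants 0 < c_1 < c_2 < 1, the following standard textbook conditions (the symbols \alpha_k, d_k, c_1, c_2 follow common usage rather than the paper's own notation):

\[
% standard Wolfe conditions; notation assumed, not taken from the paper
f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k \nabla f(x_k)^{T} d_k,
\qquad
\nabla f(x_k + \alpha_k d_k)^{T} d_k \ge c_2 \nabla f(x_k)^{T} d_k.
\]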
Similar Articles
A Hybrid Unconscious Search Algorithm for Mixed-model Assembly Line Balancing Problem with SDST, Parallel Workstation and Learning Effect
Due to the variety of products, the simultaneous production of different models plays an important role in production systems. Moreover, accounting for realistic constraints in the design of production lines has attracted considerable attention in recent research. Since the assembly line balancing problem is NP-hard, efficient methods are needed to solve problems of this kind. In this study, a new hybrid met...
A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems
In this paper, we solve unconstrained optimization problems using a free line search steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...
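The standard secant relation referred to in this abstract is, in common quasi-Newton notation (assumed here, since the abstract itself is truncated):

\[
% secant relation; B_{k+1} is the updated Hessian approximation
B_{k+1} s_k = y_k, \qquad s_k = x_{k+1} - x_k, \qquad y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
\]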
The divergence of the BFGS and Gauss-Newton methods
We present examples of divergence for the BFGS and Gauss-Newton methods. These examples have objective functions with bounded level sets and share other properties with examples published recently in this journal, such as unit steps and convexity along the search lines. As in these other examples, the iterates, function values and gradients in the new examples fit into the general formulation in ...
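For context, the textbook BFGS update of the Hessian approximation B_k, whose possible divergence is the subject above, reads as follows (standard material, not a formula quoted from the paper):

\[
% classical BFGS update with s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k)
B_{k+1} = B_k - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} + \frac{y_k y_k^{T}}{y_k^{T} s_k}.
\]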
Extensions of the Hestenes-Stiefel and Polak-Ribiere-Polyak conjugate gradient methods with sufficient descent property
Using the search directions of a recent class of three-term conjugate gradient methods, modified versions of the Hestenes-Stiefel and Polak-Ribiere-Polyak methods are proposed which satisfy the sufficient descent condition. The methods are shown to be globally convergent when the line search fulfills the (strong) Wolfe conditions. Numerical experiments are performed on a set of CUTEr unconstrained opti...
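The classical parameters being extended are, in standard conjugate gradient notation with g_k = \nabla f(x_k), y_{k-1} = g_k - g_{k-1} and search direction d_{k-1} (again common usage, not the paper's notation):

\[
% Hestenes-Stiefel and Polak-Ribiere-Polyak parameters
\beta_k^{HS} = \frac{g_k^{T} y_{k-1}}{d_{k-1}^{T} y_{k-1}}, \qquad
\beta_k^{PRP} = \frac{g_k^{T} y_{k-1}}{\| g_{k-1} \|^{2}}.
\]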
A Symmetric Rank-One Updating Method for Solving Large-Scale Optimization Problems
Finding a local minimizer in unconstrained optimization problems and finding a fixed point of the associated gradient system of ordinary differential equations are two closely related problems. Limited-memory algorithms are widely used to solve large-scale optimization problems, while Runge-Kutta methods are widely used to solve differential equations numerically. In this paper, using the concept of subspace methods and...
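Assuming the title refers to the classical symmetric rank-one (SR1) update, that textbook formula is (an assumption on our part, since the abstract is truncated before the method is specified):

\[
% SR1 update, with s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k);
% it is well defined only when the denominator is nonzero
B_{k+1} = B_k + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^{T}}{(y_k - B_k s_k)^{T} s_k}.
\]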
Primal-Dual Interior Methods for Nonconvex Nonlinear Programming
Recently, infeasibility issues in interior methods for nonconvex nonlinear programming have been studied. In particular, it has been shown that many line-search interior methods may converge to an infeasible point lying on the boundary of the feasible region with respect to the inequality constraints. The convergence is such that the search direction does not tend to zero, but the step length...
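Line-search interior methods of the kind discussed typically replace the inequality constraints c_i(x) \ge 0 by a sequence of logarithmic barrier subproblems (a standard formulation given here for context, not quoted from the paper):

\[
% log-barrier subproblem, solved for a decreasing sequence of barrier parameters \mu > 0
\min_{x} \; f(x) - \mu \sum_{i} \ln c_i(x), \qquad \mu \downarrow 0.
\]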